We present a human-in-the-loop evaluation framework for fact-checking novel misinformation claims and identifying social media messages that violate relevant policies. Our approach extracts structured representations of check-worthy claims, which are aggregated and ranked for review. Stance classifiers are then used to identify tweets supporting novel misinformation claims, which are further reviewed to determine whether they violate relevant policies. To demonstrate the feasibility of our approach, we develop a baseline system based on modern NLP methods for human-in-the-loop fact-checking in the domain of COVID-19 treatments. Using our baseline system, we show that human fact-checkers can identify 124 tweets per hour that violate Twitter's policies on COVID-19 misinformation. We will make our code, data, and detailed annotation guidelines available to support the evaluation of human-in-the-loop systems that identify novel misinformation directly from raw user-generated content.
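The claim-aggregation and stance-filtering stages described above can be sketched as follows. This is a minimal illustration only: the keyword-overlap stance heuristic, function names, and data shapes are hypothetical assumptions, standing in for the trained stance classifiers the system actually uses.

```python
from collections import Counter

def rank_claims(extracted_claims):
    """Aggregate duplicate claim strings and rank by frequency,
    a crude proxy for check-worthiness."""
    counts = Counter(extracted_claims)
    return [claim for claim, _ in counts.most_common()]

def stance(tweet, claim):
    """Toy stance heuristic: token overlap plus absence of negation => 'support'.
    A real system would use a trained stance classifier here."""
    t, c = set(tweet.lower().split()), set(claim.lower().split())
    overlap = len(t & c) / max(len(c), 1)
    if overlap < 0.5:
        return "unrelated"
    return "refute" if {"not", "no", "false"} & t else "support"

def flag_for_review(tweets, claims):
    """Return tweets that support any ranked claim, queued for human policy review."""
    ranked = rank_claims(claims)
    return [t for t in tweets
            if any(stance(t, c) == "support" for c in ranked)]
```

Tweets that pass this filter would then go to human fact-checkers for the final policy-violation judgment.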
Automatic defect detection for 3D printing processes, which shares many characteristics with change detection problems, is a vital step for quality control of 3D printed products. However, there are some critical challenges in the current state of practice. First, existing methods for computer vision-based process monitoring typically work well only under specific camera viewpoints and lighting conditions, requiring expensive pre-processing, alignment, and camera setups. Second, many defect detection techniques are specific to pre-defined defect patterns and/or print schematics. In this work, we approach the automatic defect detection problem differently, using a novel Semi-Siamese deep learning model that directly compares a reference schematic of the desired print with a camera image of the achieved print. The model then solves an image segmentation problem, identifying the locations of defects with respect to the reference frame. Unlike most change detection problems, our model is specially developed to handle images coming from different domains and is robust against perturbations in the imaging setup such as camera angle and illumination. Defect localization predictions were made in 2.75 seconds per layer using a standard MacBook Pro, which is comparable to the typical tens of seconds or less needed to print a single layer on an inkjet-based 3D printer, while achieving an F1-score of more than 0.9.
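The compare-across-domains idea can be illustrated with a toy per-pixel version: map the two inputs into a common range with separate per-branch encodings, then segment the locations where they disagree. The hand-coded normalization and threshold below merely stand in for the learned Semi-Siamese branches and segmentation head; they are not the paper's model.

```python
def encode_schematic(schematic):
    """Branch 1: the binary reference schematic already lies in [0, 1]."""
    return [[float(v) for v in row] for row in schematic]

def encode_photo(photo, lo=0, hi=255):
    """Branch 2: normalize an 8-bit camera image into the schematic's range.
    Giving each input domain its own encoder is the Semi-Siamese idea."""
    return [[(v - lo) / (hi - lo) for v in row] for row in photo]

def defect_mask(schematic, photo, thresh=0.5):
    """Per-pixel comparison head: mark locations where the achieved print
    deviates from the reference schematic."""
    a, b = encode_schematic(schematic), encode_photo(photo)
    return [[1 if abs(x - y) > thresh else 0 for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]
```

The output mask is expressed in the reference frame, mirroring how the model localizes defects with respect to the schematic.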
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
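A defining property of a decoder-only Transformer such as BLOOM is causal attention: each position attends only to itself and earlier positions. A minimal sketch of that mask (generic to any decoder-only model, not BLOOM-specific code):

```python
def causal_mask(seq_len):
    """Lower-triangular attention mask for a decoder-only Transformer:
    position i may attend to positions j <= i only (1 = allowed)."""
    return [[1 if j <= i else 0 for j in range(seq_len)]
            for i in range(seq_len)]
```

During training this masking lets every token be predicted from its left context in a single forward pass.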
Neural network-based approaches for solving partial differential equations (PDEs) have recently received special attention. However, the large majority of neural PDE solvers only apply to rectilinear domains, and do not systematically address the imposition of Dirichlet/Neumann boundary conditions over irregular domain boundaries. In this paper, we present a framework to neurally solve partial differential equations over domains with irregularly shaped (non-rectilinear) geometric boundaries. Our network takes in the shape of the domain as an input (represented using an unstructured point cloud, or any other parametric representation such as Non-Uniform Rational B-Splines) and is able to generalize to novel (unseen) irregular domains; the key technical ingredient to realizing this model is a novel approach for identifying the interior and exterior of the computational grid in a differentiable manner. We also perform a careful error analysis which reveals theoretical insights into several sources of error incurred in the model-building process. Finally, we showcase a wide variety of applications, along with favorable comparisons with ground truth solutions.
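One generic way to classify grid points as interior or exterior in a differentiable manner is to pass a signed distance to the boundary through a smooth sigmoid, so the indicator varies smoothly across the boundary instead of jumping. The circular domain below is an illustrative assumption, not the paper's construction:

```python
import math

def sdf_circle(x, y, cx=0.0, cy=0.0, r=1.0):
    """Signed distance to a circular boundary: negative inside, positive outside."""
    return math.hypot(x - cx, y - cy) - r

def soft_interior(x, y, sdf=sdf_circle, eps=0.1):
    """Differentiable interior indicator: a sigmoid of the signed distance,
    ~1 deep inside the domain, ~0 far outside, smooth across the boundary.
    eps controls how sharply the indicator transitions."""
    return 1.0 / (1.0 + math.exp(sdf(x, y) / eps))
```

A smooth indicator like this can gate loss terms on a fixed computational grid, letting gradients flow even for points near an irregular boundary.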
Stochastic compositional optimization (SCO) has attracted considerable attention for its broad applicability to important real-world problems. However, existing work on SCO assumes that the projection in each solution update is simple, which fails to hold for constraints in expectation form, such as empirical conditional value-at-risk constraints. We study a novel model that incorporates single-level expected-value constraints and two-level compositional constraints into the current SCO framework. Our model can be applied widely to data-driven optimization and risk management, including risk-averse optimization and high-moment portfolio selection, and can handle multiple constraints. We further propose a class of primal-dual algorithms that generate sequences converging to the optimal solution at a rate of $\mathcal{O}(\frac{1}{\sqrt{N}})$ under both single-level expected-value and two-level compositional constraints, where $N$ is the iteration counter, establishing benchmark rates for SCO with expected-value constraints.
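The primal-dual idea can be illustrated on a toy single-level expectation-constrained problem (deliberately much simpler than the compositional setting of the abstract): stochastic gradient descent on the Lagrangian in the primal variable, projected gradient ascent in the multiplier, with diminishing O(1/sqrt(n)) step sizes. All problem data below are illustrative assumptions.

```python
import random

def primal_dual(grad_f, g, grad_g, x0=0.0, steps=4000, seed=0):
    """Generic stochastic primal-dual iteration for min E[f(x)] s.t. E[g(x)] <= 0:
    descend the Lagrangian in x, ascend in the multiplier lam (projected onto
    lam >= 0), with step sizes decaying like 1/sqrt(n)."""
    rng = random.Random(seed)
    x, lam = x0, 0.0
    for n in range(1, steps + 1):
        eta = 1.0 / n ** 0.5
        noise = rng.gauss(0, 0.1)          # simulated stochastic-gradient noise
        x -= eta * (grad_f(x) + lam * grad_g(x) + noise)
        lam = max(0.0, lam + eta * g(x))   # project multiplier onto lam >= 0
    return x, lam

# Toy problem: min (x - 2)^2 subject to x - 1 <= 0; constrained optimum x* = 1.
x, lam = primal_dual(grad_f=lambda x: 2 * (x - 2),
                     g=lambda x: x - 1.0,
                     grad_g=lambda x: 1.0)
```

The iterates settle near the constrained optimum x = 1 with an active multiplier, the qualitative behavior the convergence rate formalizes.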
Built upon the classification and regression ideas of decision trees (DTs), the subspace learning machine (SLM) was recently proposed to offer higher performance in general classification and regression tasks. Its performance improvement is achieved at the cost of higher computational complexity. In this work, we study two ways to accelerate SLM. First, we adopt the particle swarm optimization (PSO) algorithm to speed up the search for discriminant dimensions expressed as linear combinations of the current dimensions. The search for the optimal weights in the linear combination is computationally heavy; it is conducted by probabilistic search in the original SLM. SLM acceleration via PSO requires 10-20 times fewer iterations. Second, we exploit parallel processing in the SLM implementation. Experimental results show that the accelerated SLM method achieves a speedup factor of 577 in training time while maintaining classification/regression performance comparable to the original SLM.
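For reference, a generic PSO loop looks like the sketch below: each particle tracks its personal best, the swarm tracks a global best, and velocities blend inertia with pulls toward both. This is the textbook algorithm on a toy objective, not the SLM-specific discriminant-weight search; the hyperparameter values are common defaults, chosen here for illustration.

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, seed=1):
    """Minimal particle swarm optimization minimizing f over R^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5     # inertia, cognitive, and social weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal bests
    gbest = min(pbest, key=f)[:]           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Sphere function: global minimum 0 at the origin.
best = pso(lambda x: sum(v * v for v in x))
```

Because each iteration evaluates the objective for every particle independently, PSO also parallelizes naturally, complementing the parallel-processing speedup described above.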
The plaque assay is the gold-standard method for quantifying the concentration of replication-competent lytic virions. Accelerating and automating viral plaque assays would significantly benefit clinical diagnosis, vaccine development, and the production of recombinant proteins or antiviral agents. Here, we present a rapid and stain-free quantitative viral plaque assay using lens-free holographic imaging and deep learning. This cost-effective, compact, and automated device significantly reduces the incubation time needed for traditional plaque assays while preserving their advantages over other virus quantification methods. The device captures ~0.32 gigapixels/hour of phase information for each test well, covering an area of ~30x30 mm^2 in a label-free manner, completely eliminating staining. We demonstrate the success of this computational method using Vero E6 cells and vesicular stomatitis virus. Using a neural network, this stain-free device automatically detected the first cell lysing events as early as 5 hours after incubation, and achieved a >90% detection rate of plaque-forming units (PFUs) with 100% specificity in <20 hours, providing major time savings compared to traditional plaque assays, which take ~48 hours or more. This data-driven plaque assay also offers the ability to quantify the infected area of the cell monolayer, performing automated counting and quantification of PFUs and virus-infected areas over a virus concentration range 10-fold larger than that of standard viral plaque assays. This compact, low-cost, automated PFU quantification device can be broadly used in virology research, vaccine development, and clinical applications.
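Whatever the readout, a plaque count converts to a titer by the standard virology formula PFU/mL = plaques / (dilution factor x inoculum volume); a small helper illustrating that arithmetic (not code from the paper):

```python
def titer_pfu_per_ml(plaque_count, dilution, inoculum_ml):
    """Viral titer from a plaque count.
    dilution: dilution factor of the counted well, e.g. 1e-6 for a 10^-6 dilution.
    inoculum_ml: volume plated per well, in mL."""
    return plaque_count / (dilution * inoculum_ml)
```

For example, 32 plaques in a well inoculated with 0.1 mL of a 10^-6 dilution corresponds to a titer of 3.2 x 10^8 PFU/mL.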
We propose a large-scale dataset of real-world rainy and clean image pairs, along with a method to remove rain-induced degradations from images. Because no real-world dataset for deraining exists, current state-of-the-art methods rely on synthetic data and are thus limited by the sim2real domain gap. Moreover, rigorous evaluation remains a challenge due to the absence of a real paired dataset. We fill this gap by collecting the first real paired deraining dataset through meticulous control of non-rain variations. Our dataset enables paired training and quantitative evaluation for diverse real-world rain phenomena (e.g., rain streaks and rain accumulation). To learn a representation invariant to rain phenomena, we propose a deep neural network that reconstructs the underlying scene by minimizing a rain-invariant loss between rainy and clean images. Extensive experiments demonstrate that the proposed dataset benefits existing derainers, and that our model outperforms state-of-the-art methods on real rainy images under various conditions.
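The general shape of such an objective, a reconstruction term plus a feature-alignment term that pulls rainy and clean embeddings together, can be sketched as below. The exact form of the paper's rain-invariant loss differs; the alpha weight and flat feature vectors here are illustrative assumptions.

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def rain_invariant_loss(f_rainy, f_clean, recon, clean_img, alpha=0.1):
    """Objective sketch: reconstruct the clean scene from the rainy input while
    aligning the rainy and clean feature embeddings, so the learned
    representation becomes insensitive to rain."""
    return mse(recon, clean_img) + alpha * mse(f_rainy, f_clean)
```

The alignment term is what pushes the encoder toward a representation of the underlying scene rather than of the rain.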
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity. Tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Recent progress in object pose prediction provides a promising path for robots to build object-level scene representations during navigation. However, when we deploy a robot in novel environments, out-of-distribution data can degrade prediction performance. To mitigate the domain gap, we can self-train in the target domain using robot-captured images, with predictions serving as pseudo-labels to fine-tune the object pose estimator. Unfortunately, pose predictions are typically outlier-corrupted, and it is hard to quantify their uncertainties, which can lead to low-quality pseudo-labeled data. To address the problem, we propose a SLAM-supported self-training method that leverages the robot's understanding of the 3D scene geometry to enhance object pose inference performance. Combining pose predictions with robot odometry, we formulate and solve pose graph optimization to refine the object pose estimates and make the pseudo-labels more consistent across frames. We incorporate the pose prediction covariances as variables to automatically model their uncertainties. This automatic covariance tuning (ACT) process can fit 6D pose prediction noise at the component level, leading to high-quality pseudo training data. We test our method with the deep object pose estimator (DOPE) on the YCB-Video dataset and in real robot experiments, where it achieves accuracy improvements of 34.3% and 17.8% in pose prediction, respectively. Our code is available at https://github.com/520xyxyzq/slam-super-6d.
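The covariance-aware fusion at the heart of pose graph optimization can be illustrated in 1D: measurements are weighted by their inverse covariances, so more uncertain predictions contribute less to the refined estimate. This toy deliberately omits the graph structure, the 6D geometry, and the ACT step that additionally treats the covariances themselves as optimization variables.

```python
def fuse_measurements(measurements):
    """Inverse-covariance (information-weighted) fusion of noisy 1D measurements
    of the same quantity. Each measurement is a (value, variance) pair; the
    fused estimate down-weights high-variance inputs, and the fused variance
    is the inverse of the total information."""
    info = [1.0 / var for _, var in measurements]
    est = sum(w * z for w, (z, _) in zip(info, measurements)) / sum(info)
    fused_var = 1.0 / sum(info)
    return est, fused_var
```

With mis-specified variances the fusion is biased toward bad measurements, which is the failure mode that motivates tuning the covariances automatically rather than fixing them by hand.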